Re: poor performance of loading data
From | Chris Ruprecht |
---|---|
Subject | Re: poor performance of loading data |
Date | |
Msg-id | p05101001b846e621b81d@[192.168.0.6] |
In reply to | poor performance of loading data ("Zhang, Anna" <azhang@verisign.com>) |
List | pgsql-admin |
Anna,

At 12:57 -0500 12/19/2001, Zhang, Anna wrote:
>I just installed Postgres 7.1.3 on my Red Hat 7.2 Linux box. We are doing
>research to see how Postgres is doing. I used the copy utility to import data
>from a text file which contains 32 million rows; 26 hours have passed, and it
>is still running. My question is: how does Postgres handle such data loading?
>Does it commit every row? Or is the commit point adjustable? How? Does
>Postgres provide direct load to disk files like Oracle? Are there other ways
>to speed it up? If loading performance can't be improved significantly, we
>have to go back to Oracle. Can anybody help? Thanks!

Did you edit the PostgreSQL startup parameters? If not, you have a very sluggish system. You should allocate an ample supply of shared memory (2000 - 8000 buffers of 8 KB each).

Also, do you have an index defined on your table? If so, drop the index, load the records, and then re-create the index(es) for the table. Speed will improve tremendously, since a copy command with no index to update runs much faster, and building an index over all 32 million records in a single pass is much cheaper than inserting index entries row by row. With these optimizations applied, you might find your file loaded in a fraction of the time it has already used.

Without knowing more specifics, this is all I can recommend.

Best regards,
Chris
--
Chris Ruprecht
Network grunt and bit pusher extraordinaire
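The drop-index-then-reload approach described above can be sketched as follows. This is a minimal illustration, not the poster's actual schema: the table name `hosts`, the column `hostname`, the index name, and the file path are all hypothetical.

```sql
-- Hypothetical table "hosts" with an existing index on "hostname".
-- Drop the index first so COPY does not have to update it for every row.
DROP INDEX hosts_hostname_idx;

-- Bulk-load the rows from a server-side text file in one pass.
-- COPY runs as a single transaction, so there is no per-row commit.
COPY hosts FROM '/tmp/hosts.txt';

-- Rebuild the index in one bulk operation, which is far cheaper than
-- 32 million incremental index insertions during the load.
CREATE INDEX hosts_hostname_idx ON hosts (hostname);
```

The shared-memory advice corresponds to raising `shared_buffers` in `postgresql.conf` (for example, `shared_buffers = 4000` for 4000 buffers of 8 KB each, within the 2000 - 8000 range suggested above) and restarting the postmaster so the setting takes effect.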